
    Measuring Visual Consistency in 3D Rendering Systems

    One of the major challenges facing a present-day game development company is the removal of bugs from increasingly complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution of the objects present in a sequence of bug-free frames. This is done by converting the positions that pixels occupy over time into the equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of each object. We applied our framework to the publicly available game RacingGame developed for Microsoft® XNA®. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
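
    As a concrete illustration of the measurement, the following Python sketch scores a rendered frame by comparing its back-projected, coloured 3D samples against a database of bug-free reference points; the function names, the nearest-neighbour search and the way spatial and colour errors are combined are illustrative assumptions, not the authors' implementation.

        # Illustrative sketch: score a frame by the distance of its coloured 3D samples
        # from a reference point cloud built from bug-free frames. Names, weights and
        # thresholds are assumptions for illustration only.
        import numpy as np
        from scipy.spatial import cKDTree

        def build_reference(points_xyz, colours_rgb):
            """Store bug-free samples: 3D positions with their observed colours."""
            return cKDTree(np.asarray(points_xyz, dtype=float)), np.asarray(colours_rgb, dtype=float)

        def visual_inconsistency(tree, ref_colours, test_xyz, test_rgb):
            """Mean discrepancy between each test sample and its nearest reference sample."""
            dist, idx = tree.query(np.asarray(test_xyz, dtype=float), k=1)
            colour_err = np.linalg.norm(np.asarray(test_rgb, dtype=float) - ref_colours[idx], axis=-1)
            # Equal weighting of spatial and colour error is an arbitrary choice here.
            return float(np.mean(dist) + np.mean(colour_err))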

    A toolbox for animal call recognition

    Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world's biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal's proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
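
    One of the simpler recognizers in such a toolbox can be sketched as template matching over a spectrogram; the sketch below is a minimal illustration of that idea, and its function names, window sizes and detection threshold are assumptions, not the toolbox's actual API.

        # Minimal sketch of a template-based call recognizer: correlate a z-scored
        # spectrogram template of a call against z-scored field audio. Parameters and
        # the threshold are illustrative assumptions.
        import numpy as np
        from scipy.signal import spectrogram, correlate2d

        def call_detections(audio, fs, template, threshold=0.6):
            """Return spectrogram frame offsets where the template matches strongly."""
            _, _, spec = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
            spec = np.log1p(spec)                    # compress the dynamic range of noisy field audio
            spec = (spec - spec.mean()) / (spec.std() + 1e-9)
            tmpl = (template - template.mean()) / (template.std() + 1e-9)
            score = correlate2d(spec, tmpl, mode="valid") / tmpl.size
            return np.nonzero(score.max(axis=0) > threshold)[0]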

    A Primal-Dual Algorithm for Link Dependent Origin Destination Matrix Estimation

    Origin-Destination Matrix (ODM) estimation is a classical problem in transport engineering, aiming to recover flows from every origin to every destination from measured traffic counts and a priori model information. In addition to traffic counts, the present contribution takes advantage of probe trajectories, whose capture is made possible by new measurement technologies. It extends the concept of the ODM to that of the Link-dependent ODM (LODM), which keeps the information about the flow distribution on links and inherently contains the ODM assignment. Further, an original formulation of LODM estimation from traffic counts and probe trajectories is presented as an optimisation problem, where the functional to be minimised consists of five convex functions, each modelling a constraint or property of the transport problem: consistency with traffic counts, consistency with sampled probe trajectories, consistency with traffic conservation (Kirchhoff's law), similarity of flows having close origins and destinations, and positivity of traffic flows. A primal-dual algorithm is devised to minimise the designed functional, as the corresponding objective functions are not necessarily differentiable. A case study, on a simulated network and traffic, validates the feasibility of the procedure and details its benefits for the estimation of an LODM matching real-network constraints and observations.
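
    For non-differentiable convex terms of this kind, a generic primal-dual splitting of the Chambolle-Pock type can be sketched as below; the proximal operators and the linear operator K are placeholders standing in for the paper's five terms, not the authors' actual formulation.

        # Generic primal-dual iteration for min_x g(x) + f(Kx), usable when g and f are
        # convex but not differentiable. Convergence requires tau * sigma * ||K||^2 <= 1.
        # prox_g and prox_f_star are placeholders for the paper's traffic-related terms.
        import numpy as np

        def primal_dual(K, prox_g, prox_f_star, x0, tau, sigma, n_iter=500):
            x = x0.copy()
            x_bar = x0.copy()
            y = np.zeros(K.shape[0])
            for _ in range(n_iter):
                y = prox_f_star(y + sigma * (K @ x_bar), sigma)     # dual proximal ascent
                x_new = prox_g(x - tau * (K.T @ y), tau)            # primal proximal descent
                x_bar = 2.0 * x_new - x                             # over-relaxation
                x = x_new
            return x

        # Example use for one of the listed properties, positivity of flows:
        # prox_g = lambda v, t: np.maximum(v, 0.0)                  # projection onto x >= 0
        # prox_f_star = lambda v, s: (v - s * b) / (1.0 + s)        # for f(z) = 0.5 * ||z - b||^2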

    Probabilistic travel time progression and its application to automatic vehicle identification data

    Travel time has been identified as an important variable for evaluating the performance of a transportation system. Based on travel time predictions, road users can make optimal decisions when choosing their route and departure time. In order to make adequate use of advanced data collection methods that provide different types of real-time information, this paper proposes a novel approach to the estimation of long roadway travel times using Automatic Vehicle Identification (AVI) technology. Since long roads contain a large number of scanners, the AVI sample size tends to shrink and, as such, computing the distribution of the total road travel time becomes difficult. In this work, we introduce a probabilistic framework that extends the deterministic travel time progression method to dependent random variables and enables the off-line estimation of road travel time distributions. In the proposed method, the accuracy of the estimation does not depend on the size of the sample over the entire corridor, but only on the amount of historical data that is available for each link. In practice, the system is also robust to small link samples and can be used to detect outliers within the AVI data.
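
    The composition of a corridor distribution from per-link historical data can be illustrated with a small Monte Carlo sketch; the shared-quantile device used below to induce dependence between links is a crude stand-in for the paper's treatment of dependent random variables, and all names are hypothetical.

        # Rough sketch: draw corridor travel times by sampling each link's historical
        # distribution via its inverse CDF; a shared quantile crudely induces positive
        # dependence between links. Illustrative only, not the authors' estimator.
        import numpy as np

        def corridor_travel_times(link_samples, n_draws=10000, dependent=True, seed=None):
            """link_samples: list of 1-D arrays of historical travel times, one per link."""
            rng = np.random.default_rng(seed)
            totals = np.zeros(n_draws)
            u_shared = rng.uniform(size=n_draws)
            for samples in link_samples:
                s = np.sort(np.asarray(samples, dtype=float))
                u = u_shared if dependent else rng.uniform(size=n_draws)
                totals += np.quantile(s, u)              # inverse-CDF sampling per link
            return totals                                # e.g. np.percentile(totals, [5, 50, 95])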

    Computational approaches to the visual validation of 3D virtual environments

    Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of a virtual environment's content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
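
    The view-based mapping described above can be loosely illustrated as a learned regression from per-frame color and geometry features to the transformation parameters that produced the view, with large prediction errors flagging suspect frames; the feature construction, regressor choice and threshold below are assumptions for illustration only.

        # Loose sketch of the view-based idea: learn features -> transformation parameters
        # on bug-free frames, then flag frames whose prediction error is unusually large.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def fit_view_mapper(features, transforms):
            """features: (n_frames, d) color/geometry descriptors; transforms: (n_frames, k)."""
            model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
            model.fit(features, transforms)
            return model

        def inconsistent_frames(model, features, transforms, tolerance=3.0):
            """Indices of frames whose error exceeds the mean by `tolerance` standard deviations."""
            err = np.linalg.norm(model.predict(features) - transforms, axis=1)
            return np.nonzero(err > err.mean() + tolerance * err.std())[0]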

    A framework for the semi-automatic testing of video games

    Game environments are complex interactive systems that require extensive analysis and testing to ensure that they are of a high enough quality to be released commercially. In particular, the last build of the product needs an additional and extensive beta test, carried out by people who play the game, in order to establish its robustness and playability. This entails additional costs from the viewpoint of a company, as it requires the hiring of play testers. In the present work we propose a general software framework that integrates Artificial Intelligence (AI) agents and Computer Vision (CV) technologies to support the test team and to help improve and accelerate the test process. We also present a prototype shadow alias detection algorithm that illustrates the effectiveness of the framework in developing automated visual debugging technology that will ease the heavy cost of beta testing games.
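
    A shadow alias detector of the kind mentioned above can be sketched as a measure of how jagged a shadow boundary is; the luminance threshold and the jaggedness score below are illustrative assumptions, not the framework's actual algorithm.

        # Simplified illustration: extract a shadow mask by luminance thresholding, then
        # measure how much its boundary deviates from a smoothed version of itself.
        import numpy as np
        from scipy import ndimage

        def shadow_alias_score(rgb_image, shadow_threshold=0.25, smooth_sigma=2.0):
            """Higher scores suggest a more jagged (potentially aliased) shadow edge."""
            luminance = np.asarray(rgb_image, dtype=float) @ np.array([0.299, 0.587, 0.114])
            mask = (luminance / max(luminance.max(), 1e-9)) < shadow_threshold
            boundary = mask ^ ndimage.binary_erosion(mask)          # one-pixel shadow outline
            smooth = ndimage.gaussian_filter(mask.astype(float), smooth_sigma) > 0.5
            smooth_boundary = smooth ^ ndimage.binary_erosion(smooth)
            # Fraction of boundary pixels that disagree with the smoothed outline.
            return float(np.sum(boundary ^ smooth_boundary)) / max(int(np.sum(boundary)), 1)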

    A physically sound vehicle-driver model for realistic microscopic simulation

    In microscopic traffic simulators, the interaction between vehicles is considered. The dynamics of the system then becomes an emergent property of the interaction between its components. Such interactions include lane-changing, car-following behaviours and intersection management. Although, in some cases, such simulators produce realistic predictions, they do not allow for an important aspect of the dynamics, that is, the driver-vehicle interaction. This paper introduces a physically sound vehicle-driver model for realistic microscopic simulation. By building a nanoscopic traffic simulation model that uses steering angle and throttle position as parameters, the model aims to overcome the unrealistic acceleration and deceleration values found in various microscopic simulation tools. A physics engine calculates the driving force of the vehicle, and the preliminary results presented here show that, through a realistic driver-vehicle-environment simulator, it becomes possible to model realistic driver and vehicle behaviours in a traffic simulation.
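
    A minimal driver-vehicle model of this kind, driven by throttle position and steering angle rather than prescribed accelerations, can be sketched as below; the force balance and all constants are illustrative assumptions, not the paper's calibrated physics engine.

        # Kinematic bicycle model with a simple longitudinal force balance: acceleration
        # emerges from throttle, drag and mass instead of being prescribed directly.
        import math
        from dataclasses import dataclass

        @dataclass
        class VehicleState:
            x: float = 0.0        # position [m]
            y: float = 0.0
            heading: float = 0.0  # [rad]
            speed: float = 0.0    # [m/s]

        def step(state, throttle, steering_angle, dt=0.1,
                 mass=1200.0, max_drive_force=4000.0, drag_coeff=0.4, wheelbase=2.6):
            """Advance one time step; throttle in [0, 1], steering_angle in radians."""
            drive = throttle * max_drive_force
            drag = drag_coeff * state.speed ** 2
            accel = (drive - drag) / mass
            speed = max(0.0, state.speed + accel * dt)
            heading = state.heading + (speed / wheelbase) * math.tan(steering_angle) * dt
            return VehicleState(
                x=state.x + speed * math.cos(heading) * dt,
                y=state.y + speed * math.sin(heading) * dt,
                heading=heading,
                speed=speed,
            )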

    Traffic state estimation from partial Bluetooth and volume observations: case study in the Brisbane metropolitan area

    The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeway and arterial roads. Bluetooth traffic data are generally incomplete, for they only relate to those vehicles that are equipped with Bluetooth devices and that are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles versus the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a Sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
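
    The estimation step can be illustrated with a generic bootstrap particle filter, where hidden traffic states are propagated through a traffic model and weighted by how well they explain the partial Bluetooth and volume observations; the traffic_model and likelihood functions below are placeholders, not the paper's multi-class model.

        # Generic bootstrap particle filter sketch for sequential Monte Carlo estimation
        # with partial observations. The transition and likelihood functions are
        # placeholders standing in for the multi-class traffic model and BT/volume data.
        import numpy as np

        def particle_filter(observations, traffic_model, likelihood, init_particles, seed=None):
            rng = np.random.default_rng(seed)
            particles = np.asarray(init_particles, dtype=float)
            n = len(particles)
            estimates = []
            for obs in observations:
                particles = traffic_model(particles, rng)    # propagate hidden traffic state
                weights = likelihood(obs, particles)         # e.g. BT counts given a penetration rate
                weights = weights / weights.sum()
                estimates.append(np.average(particles, axis=0, weights=weights))
                particles = particles[rng.choice(n, size=n, p=weights)]   # resample to avoid degeneracy
            return np.array(estimates)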

    Noisy Bluetooth traffic data?

    Traffic state estimation in an urban road network remains a challenge for traffic models, and the question of how such a network performs remains a difficult one for traffic operators to answer. A lack of detailed traffic information has long restricted research in this area. The introduction of Bluetooth into the automotive world presented an alternative that has now developed to a stage where large-scale test beds are becoming available for traffic monitoring and model validation purposes. But how much confidence should we have in such data? This paper aims to give an overview of the usage of Bluetooth, primarily for the city-scale management of urban transport networks, and to encourage researchers and practitioners to take a more cautious look at what is currently understood as a mature technology for monitoring travellers in urban environments. We argue that the full value of this technology is yet to be realised, for the analytical issues peculiar to the data have still to be adequately resolved.